This is an evaluation of forecasts of COVID-19 case and death numbers in 32 European countries submitted to the European COVID-19 Forecast Hub. You can find more information on the European Forecast Hub GitHub page.
This report is intended as a basic evaluation of forecasts that helps modellers better understand their performance. The structure and visualisations are likely to change in the future and we cannot rule out mistakes. If you have questions or want to give feedback, please create an issue on our GitHub repository. Note that all forecast dates have been changed to the corresponding submission date (every Monday) to allow easier comparison.
Forecast accuracy
Here is an overview of different evaluation metrics. See below for a more detailed explanation of the scoring metrics used. ‘Overall’ shows scores for all past weeks, ‘latest’ only spans the last 5-6 weeks of data. ‘Detailed’ represents the full data set that you can download for your own analysis.
Evaluation metrics
- Relative skill is a metric based on the weighted interval score (WIS) that uses a ‘pairwise comparison tournament’. All pairs of forecasters are compared against each other in terms of the WIS. For each pair, the mean scores of both models on the set of common targets for which both have made predictions are used to obtain a mean score ratio. The relative skill of a model is the geometric mean of its mean score ratios against all other models. Smaller values are better, and a value smaller than one means that the model beats the average forecasting model.
- The weighted interval score is a proper scoring rule (meaning it cannot be gamed) suited to scoring forecasts in an interval format. It has three components: sharpness, underprediction and overprediction. Sharpness is the width of your prediction interval. Over- and underprediction only come into play if the prediction interval does not cover the true value. They are the absolute difference between the true value and the lower or upper bound of your prediction interval (depending on whether your forecast was too high or too low).
- coverage deviation is the average difference between empirical and nominal interval coverage. Say your 50 percent prediction interval covers only 20 percent of all true values; then your coverage deviation is 0.2 - 0.5 = -0.3. The coverage deviation value in the table is calculated by averaging over the coverage deviations of all available prediction intervals. If the value is negative, your intervals have covered less than they should; if it is positive, your forecasts could be a little more confident.
- bias is a measure between -1 and 1 that expresses your tendency to underpredict (-1) or overpredict (1). In contrast to the over- and underprediction components of the WIS, it is bounded between -1 and 1 and cannot go to infinity, making it less susceptible to outliers.
- aem is the absolute error of your median forecast. A high aem means your median forecasts tend to be far from the true values.
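The metrics above can be sketched in a few lines of code. This is an illustrative toy example with made-up numbers, not the Hub's actual implementation (which uses the R package scoringutils); the function names are our own.

```python
from statistics import geometric_mean

def interval_score(lower, upper, y, alpha):
    """Score for a central (1 - alpha) prediction interval, as the sum of
    its three components: sharpness, underprediction and overprediction."""
    sharpness = upper - lower
    underprediction = (2 / alpha) * max(lower - y, 0)  # truth below the interval
    overprediction = (2 / alpha) * max(y - upper, 0)   # truth above the interval
    return sharpness + underprediction + overprediction

def coverage_deviation(intervals, truths, nominal):
    """Empirical minus nominal coverage for one interval level, e.g. a 50%
    interval covering only 20% of true values gives 0.2 - 0.5 = -0.3."""
    covered = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, truths))
    return covered / len(truths) - nominal

# Toy example: a 50% interval (alpha = 0.5) that misses the truth from above.
score = interval_score(lower=100, upper=140, y=90, alpha=0.5)
# sharpness = 40, underprediction = (2 / 0.5) * 10 = 40, overprediction = 0
print(score)  # 80

# Absolute error of the median forecast (aem) for a median of 120, truth of 90.
aem = abs(120 - 90)
print(aem)  # 30

# Relative skill sketch: mean WIS ratios of one model against two others on
# their shared targets (made-up numbers), combined via the geometric mean.
rel_skill = geometric_mean([0.8, 1.1])  # < 1: beats the average model
```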
Scores over time
Here you can see a visualisation of forecaster scores over time. The first tab shows the weighted interval score; the other tabs show its components: sharpness (how narrow the forecasts are - smaller is better) and the penalties for underprediction and overprediction.
[Interactive plots: one panel per country, from Austria to the United Kingdom, each with tabs showing the weighted interval score, overprediction, underprediction and sharpness over time.]
WIS decomposition
As mentioned above, the weighted interval score can be decomposed into three parts: sharpness (the amount of uncertainty around the forecast), overprediction and underprediction. This visualisation gives an impression of how these three forms of penalty are distributed across the different models.
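The decomposition shown in these plots amounts to expressing each penalty as a share of the total WIS. A minimal sketch with made-up component values:

```python
# Illustrative only: hypothetical WIS components for one model, and the
# share each contributes to the total score, as in the stacked plots below.
components = {"sharpness": 25.0, "overprediction": 60.0, "underprediction": 15.0}
total = sum(components.values())
shares = {name: value / total for name, value in components.items()}
print(shares)  # here overprediction dominates the total penalty
```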
[Interactive plots: an overall tab plus one tab per country, from Austria to the United Kingdom, showing the decomposition of the WIS into sharpness, overprediction and underprediction for each model.]
If you want to learn more about a model, you can go to the ‘data-processed’ folder of the European Forecast Hub GitHub repository, select a model and access its metadata file with further information provided by the model authors.